IndexError: too many indices for array

by: rmrasul97@gmail.com, 8 years ago

Last edited: 8 years ago

Hello community,

I have followed the instructions on this tutorial for my custom data to train an RNN.
https://pythonprogramming.net/train-test-tensorflow-deep-learning-tutorial/

## ABOUT MY DATA:
1. I am generating multiple exponential signals.
2. The steepness (time constant, tau) of each signal is converted to one-hot and used as the label.

<pre class='prettyprint lang-py'>

"""Input signals"""
for X in range(no_t):

    random.seed()
    train_input= np.random.uniform(lorange,hirange)
    X= np.array(amplitude * np.exp(-t / train_input))
    print(X.shape)


"""Input labels"""
for label in range(no_tau):

    random.seed()
    tau = np.random.uniform(lorange, hirange)
    label = np.array(one_hot([int(math.ceil(tau / resolution))]))
    #print(label.shape)
</pre>

## THE PLACEHOLDERS:
<pre class='prettyprint lang-py'>
""" Input placeholders for signal and label"""
x = tf.placeholder('float', [None,n_chunks, chunk_size])
y = tf.placeholder('float')
</pre>

## RNN: I have used the same LSTM RNN as in the tutorial above.

## MY PROBLEM: I have created batches from the input data and labels as follows, but I can't work out the right number of indices for the batch array.

Here is the traceback for my error:

Traceback (most recent call last):
  File "C:/Users/raisa/PycharmProjects/RNN_1/test2.py", line 105, in <module>
    train_neural_network(x)
  File "C:/Users/raisa/PycharmProjects/RNN_1/test2.py", line 89, in train_neural_network
    batch_x = np.array(X[start:end])
IndexError: too many indices for array


How should I reshape my input?

<pre class='prettyprint lang-py'>
        for epoch in range(hm_epochs):
            epoch_loss = 0
            i = 0
            while i < no_t:
                start = i
                end = i + batch_size
                batch_x = np.array(X[start:end])
                batch_y = np.array(label[start:end])

                _, c = sess.run([optimizer, cost], feed_dict={x: batch_x,
                                                              y: batch_y})
                epoch_loss += c
                i += batch_size
</pre>








Strange. Are you absolutely positive X has any dimensions at all? Try printing out X right before the slice. "Too many indices" happens when you index by more dimensions than the array has. Here, you're slicing by just 1 dimension... so... I have to wonder if X is empty or something!
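For instance, if the loop leaves X as a 0-d array (which can happen when t is a scalar rather than an array), slicing it with even one index raises exactly this error. A minimal repro with made-up numbers, not the thread's actual data:

```python
import numpy as np

# Hypothetical values: if t is a scalar, amplitude * np.exp(-t / tau)
# produces a 0-d array, which cannot be indexed or sliced at all.
X = np.array(1.0 * np.exp(-0.5 / 2.0))   # 0-d array
print(X.shape)                            # ()

try:
    batch_x = X[0:10]                     # slicing by 1 index on 0 dims
except IndexError as e:
    print(type(e).__name__)               # IndexError
```

Printing `X.shape` right before the slice, as suggested above, tells you immediately whether you are in this situation.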

-Harrison 8 years ago



I changed X = np.array(amplitude * np.exp(-t / train_input)) to use np.arange, which got rid of the dimension issue. Thank you :)
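For anyone hitting the same thing: a minimal sketch of collecting the signals into a 2-D array (one row per signal, instead of overwriting X on every pass of the loop). The values here are stand-ins; only the variable names follow the original snippet:

```python
import numpy as np

# Stand-in values; the real no_t, amplitude, lorange, hirange, and t
# come from the original script.
no_t = 4
amplitude = 1.0
lorange, hirange = 1.0, 5.0
t = np.arange(0.0, 10.0, 0.1)                # time axis, 100 samples

rng = np.random.default_rng(0)
signals = []
for _ in range(no_t):
    tau = rng.uniform(lorange, hirange)      # random time constant
    signals.append(amplitude * np.exp(-t / tau))

X = np.stack(signals)        # shape (no_t, len(t)) -> (4, 100)
batch_x = X[0:2]             # slicing by one index now works
print(X.shape, batch_x.shape)
```

With X shaped (no_t, len(t)), the batching loop's X[start:end] slices rows as intended, and batch_x can then be reshaped to the (batch, n_chunks, chunk_size) layout the placeholder expects.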

-rmrasul97@gmail.com 8 years ago
Last edited 8 years ago
